
The need for AI regulations resurfaces

As the UK gears up to host a two-day AI summit, the US introduces new rules related to privacy and security
 
Jaydeep Saha
Global Reporter, HCLTech
9-minute read

With the widespread availability and use of artificial intelligence (AI) tools, cybersecurity officials at government and organizational levels are in a never-ending cat-and-mouse game with cybercriminals.

While there’s no denying that AI, especially generative AI (GenAI), has been a force for good across industries, leaders such as UK Prime Minister Rishi Sunak and US President Joe Biden are now joining the chorus of voices calling for guardrails around the technology.

Before examining the need for AI regulation, let’s first look at what the UK government’s recently released report, Safety and Security Risks of Generative Artificial Intelligence to 2025, highlighted:

It said GenAI development has the potential to bring significant global benefits, such as productivity and innovation gains across many sectors, including healthcare, finance and IT, but will also increase risks to safety and security by enhancing threat actor capabilities and increasing the effectiveness of attacks.

The enhancement of terrorist capabilities ranges from propaganda, radicalization and recruitment to funding streams, weapons development, attack planning and the assembly of knowledge on physical attacks by non-state violent actors, including attacks involving chemical, biological and radiological weapons.

On the cybercrime front, it added, AI capabilities can now help threat actors plan and carry out cyberattacks and fuel a rise in fraud, impersonation, ransomware attacks, currency theft, data harvesting, voice cloning, deepfakes and child sexual abuse imagery.

Following the publication of the UK government report, Sunak warned that “AI could ease the process of building chemical and biological weapons and society could lose all control over AI, preventing it from being switched off in a worst-case scenario.”

Sunak’s Thursday speech set out the capabilities and potential risks posed by AI, including cyberattacks, fraud and child sexual abuse, as well as the risk that AI could be used by terrorist groups “to spread fear and disruption on an even greater scale.”

However, while he said mitigating the risk of human extinction from AI should be a “global priority,” he remained generally “optimistic” about the potential of AI to transform people’s lives for the better. He said: “This is not a risk that people need to be losing sleep over right now and I don’t want to be alarmist.”

Three days after the UK PM’s statement, President Biden on Monday signed an executive order requiring AI developers to share safety test results with the US government, a move the White House described as “the most significant actions ever taken by any government to advance the field of AI safety.”

The executive order also includes steps such as developing standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy; protecting against the risks of using AI to engineer dangerous biological materials; protecting Americans from AI-enabled fraud and deception by establishing standards for detecting AI-generated content; and establishing an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software.

While the US president, French President Emmanuel Macron and Canadian PM Justin Trudeau have decided not to attend the two-day AI summit that UK PM Sunak is holding at Bletchley Park, US Vice President Kamala Harris, Italian PM Giorgia Meloni, European Commission President Ursula von der Leyen, UN Secretary-General Antonio Guterres and the “godfathers” of modern AI, Geoffrey Hinton and Yoshua Bengio, will be there.

Reasons behind this rising fear

Among many such reports, Sapio Research’s ‘Generative AI and Cybersecurity: Bright Future or Business Battleground?’, which surveyed more than 650 senior security operations professionals in the US, recently found that 85% of security personnel attributed the rise in cyberattacks to bad actors using GenAI, with 75% having witnessed an increase in attacks over the last year.

The report added that 70% of security professionals said GenAI has had a positive impact on employee productivity and engagement, while 63% vouched for the technology’s ability to improve employee morale.


HCLTech in the AI arena

As GenAI reshapes the technology landscape, HCLTech, with decades of experience in machine learning and neural networks, has unlocked new opportunities, improved operational efficiency and delivered cutting-edge solutions that help businesses with task automation, cybersecurity enhancement and innovation, thereby meeting the evolving demands of the digital era.

HCLTech is committed to ensuring that its AI solutions are practical, ethical and compliant with statutory regulations. The solutions utilize an ethical three-dimensional framework that focuses on data governance, model governance and process governance. This ensures the training data remains unbiased, the model follows transparency and explainability guidelines and humans remain in the loop for oversight.
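As an illustration only, and not a description of HCLTech’s actual implementation, a three-dimensional governance framework of this kind could be expressed as a checklist that a review process walks through before a model is released. The three dimension names below mirror the article; the individual checks are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceCheck:
    """A single yes/no control within one governance dimension."""
    description: str
    passed: bool = False

@dataclass
class GovernanceDimension:
    """One of the three dimensions named in the article."""
    name: str
    checks: list[GovernanceCheck] = field(default_factory=list)

    def is_satisfied(self) -> bool:
        # A dimension is satisfied only when every one of its checks has passed.
        return all(check.passed for check in self.checks)

# Hypothetical controls: the dimension names come from the article,
# the concrete checks are illustrative assumptions.
framework = [
    GovernanceDimension("Data governance", [
        GovernanceCheck("Training data audited for bias"),
        GovernanceCheck("Data provenance documented"),
    ]),
    GovernanceDimension("Model governance", [
        GovernanceCheck("Transparency/explainability report produced"),
        GovernanceCheck("Evaluation results reviewed"),
    ]),
    GovernanceDimension("Process governance", [
        GovernanceCheck("Human-in-the-loop sign-off recorded"),
    ]),
]

def release_approved(dimensions: list[GovernanceDimension]) -> bool:
    """A release is approved only when every dimension is fully satisfied."""
    return all(dimension.is_satisfied() for dimension in dimensions)
```

In a sketch like this, oversight stays with people: the checks only record that a human reviewer has signed off on each control, they do not automate the judgment itself.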

HCLTech believes that governments and organizations must prioritize AI regulations so that the rise of AI does not go unchecked, and this framework is designed to ensure that AI solutions are developed responsibly, with a focus on transparency, accountability and fairness.

For over a year now, HCLTech Trends and Insights (T&I) has been raising awareness of and reporting on worldwide developments in AI and the concerns related to it. T&I has dug deep into almost everything that happened over the last year. Here are some of its related contributions.

Sept 21, 2022: AI is a strong pillar for the new age of cybersecurity

Jan 30: The unknown fear behind the growing use of AI

Feb 1: The lull before the cyberattack storm

Feb 7: How to protect orgs as cybersecurity concerns grow in the UK

Feb 24: DA & AI among tools the UK uses to hunt down fraudsters

Mar 1: Regulation needed to keep Generative AI tools in check

Mar 3: 2023: The age of augmented intelligence

Mar 17: A new generation of transformers is rising

Mar 23: AI-n’t you scared? You should be!

May 9: Despite recent warnings, a regulated AI future to bring optimism

May 23: What AI has in store for you in the next five years

Jun 13: The benefits of AI in the healthcare industry

Jun 14: The importance of AI regulations

Aug 16: Zero-trust cybersecurity: A must for today’s data centers

Aug 17: The rise of AI-powered robotics in Europe

Sept 8: Generative AI transforming the healthcare industry

AI-powered automation has already changed the nature of daily work in factories, hospitals, education, financial services and many other sectors, but it has yet to entirely remove human input. The UK PM pointed to AI tools efficiently handling admin tasks that were traditionally carried out by humans. Sunak insisted it was too simplistic to say AI would “take people’s jobs” and instead urged the public to view the technology as a “co-pilot” in the workplace.

HCLTech’s Dynamics 365 Copilot, an AI-powered assistant, is helping users in sales, service, marketing and supply chain management to be more organized and productive. In addition, HCLTech’s GenAI labs, which support teams in building solutions and services across various roles and domains, will also drive a GenAI skills academy to train people on the best use of GenAI.

Want to learn more about the potential of GenAI and what HCLTech has been doing in this area? Here’s Kalyan Kumar, Global Chief Technology Officer and Head (Ecosystems) at HCLTech, decoding the impact of GenAI on IT services.
